Class Granularity: How richly does your knowledge graph represent the real world?

Seo, Sumin | Cheon, Heeseon | Kim, Hyunho

arXiv.org Artificial Intelligence

To effectively manage and utilize knowledge graphs, it is crucial to have metrics that can assess their quality from various perspectives. While there have been studies on knowledge graph quality metrics, there has been a lack of research on metrics that measure how richly ontologies, which form the backbone of knowledge graphs, are defined, or on the impact of richly defined ontologies. In this study, we propose a new metric called Class Granularity, which measures how well a knowledge graph is structured in terms of how finely classes with unique characteristics are defined. Furthermore, this research examines the potential impact of Class Granularity on downstream tasks over knowledge graphs. In particular, we explore its influence on graph embedding and provide experimental results. Additionally, this research goes beyond traditional Linked Open Data comparison studies, which mainly focus on factors like scale and class distribution, by using Class Granularity to compare four different LOD sources.
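To make the intuition behind fine versus coarse class definitions concrete, here is a toy sketch. It is not the paper's Class Granularity metric; the entities, classes, and the naive distinct-classes-per-entity score are all hypothetical, chosen only to show how a finer ontology partitions the same entities into more distinctive classes.

```python
# Toy knowledge-graph typings: entity -> assigned class.
# "coarse" lumps everything under one class; "fine" splits by characteristic.
coarse = {"ibuprofen": "Drug", "aspirin": "Drug", "insulin": "Drug"}
fine = {"ibuprofen": "NSAID", "aspirin": "NSAID", "insulin": "Hormone"}

def naive_granularity(typing):
    """Ratio of distinct classes to typed entities (1.0 = every entity
    has its own class; values near 0 = one catch-all class)."""
    return len(set(typing.values())) / len(typing)

print(naive_granularity(coarse))  # 1 class / 3 entities
print(naive_granularity(fine))    # 2 classes / 3 entities
```

A real metric would also need to account for whether the finer classes carry genuinely distinguishing properties, which is the aspect the proposed Class Granularity targets.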


Action Recognition With Coarse-to-Fine Deep Feature Integration and Asynchronous Fusion

Lin, Weiyao (Shanghai Jiao Tong University) | Zhang, Chongyang (Shanghai Jiao Tong University) | Lu, Ke (University of Chinese Academy of Sciences) | Sheng, Bin (Shanghai Jiao Tong University) | Wu, Jianxin (Nanjing University) | Ni, Bingbing (Shanghai Jiao Tong University) | Liu, Xin (Shenzhen Tencent Computer System Co.) | Xiong, Hongkai (Shanghai Jiao Tong University)

AAAI Conferences

Action recognition is an important yet challenging task in computer vision. In this paper, we propose a novel deep-learning framework for action recognition, which improves recognition accuracy by: 1) deriving more precise features for representing actions, and 2) reducing the asynchrony between different information streams. We first introduce a coarse-to-fine network which extracts shared deep features at different action-class granularities and progressively integrates them to obtain a more accurate feature representation for input actions. We further introduce an asynchronous fusion network. It fuses information from different streams by asynchronously integrating stream-wise features at different time points, hence better leveraging the complementary information in different streams. Experimental results on action recognition benchmarks demonstrate that our approach achieves state-of-the-art performance.
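The asynchronous-fusion idea can be sketched in miniature: rather than combining appearance and motion features at the same frame index, each appearance feature is paired with a motion feature at a shifted time point. This is a conceptual illustration only, not the paper's network; the feature vectors, the per-frame offsets, and simple averaging as the fusion operator are all assumptions made for the sketch.

```python
def fuse_async(appearance, motion, offsets):
    """Fuse each appearance feature at time t with the motion feature at
    t + offsets[t] (clamped to valid indices), by element-wise averaging."""
    fused = []
    for t, a in enumerate(appearance):
        m = motion[min(max(t + offsets[t], 0), len(motion) - 1)]
        fused.append([(x + y) / 2 for x, y in zip(a, m)])
    return fused

# Hypothetical per-frame features for two streams and illustrative offsets.
appearance = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
motion     = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
offsets    = [1, 0, -1]  # temporal shifts; a real model would learn these
print(fuse_async(appearance, motion, offsets))
```

With offsets of all zeros this reduces to ordinary synchronous (frame-aligned) fusion, which is the baseline the asynchronous scheme is meant to improve on.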